Results 1 - 20 of 2,551
1.
Genome Biol ; 25(1): 106, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664753

ABSTRACT

Centrifuger is an efficient taxonomic classification method that compares sequencing reads against a microbial genome database. In Centrifuger, the Burrows-Wheeler transformed genome sequences are losslessly compressed using a novel scheme called run-block compression. Run-block compression achieves sublinear space complexity and is effective at compressing diverse microbial databases like RefSeq while supporting fast rank queries. Combining this compression method with other strategies for compacting the Ferragina-Manzini (FM) index, Centrifuger reduces the memory footprint by half compared to other FM-index-based approaches. Furthermore, the lossless compression and the unconstrained match length help Centrifuger achieve greater accuracy than competing methods at lower taxonomic levels.
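
As a loose illustration of why rank queries over a compressed Burrows-Wheeler transform (BWT) matter for FM-index search, the sketch below stores a toy BWT as runs and answers rank queries by scanning them. This is not Centrifuger's run-block compression, which achieves sublinear space and fast queries; it only shows the operation such schemes must support.

```python
# Minimal sketch: rank queries over a run-length-encoded BWT.
# Not Centrifuger's run-block scheme; illustrative only.
from itertools import groupby

class RunLengthBWT:
    def __init__(self, bwt: str):
        # Store the BWT as (character, run_length) pairs.
        self.runs = [(c, len(list(g))) for c, g in groupby(bwt)]

    def rank(self, c: str, i: int) -> int:
        """Count occurrences of c in bwt[:i] by scanning the runs."""
        count, pos = 0, 0
        for ch, length in self.runs:
            if pos >= i:
                break
            if ch == c:
                count += min(length, i - pos)
            pos += length
        return count

bwt = RunLengthBWT("annb$aa")   # BWT of "banana$"
print(bwt.rank("a", 7))         # -> 3
```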


Subjects
Data Compression, Metagenomics, Data Compression/methods, Metagenomics/methods, Software, Microbial Genome, Bacterial Genome, DNA Sequence Analysis/methods
2.
PLoS One ; 19(4): e0301622, 2024.
Article in English | MEDLINE | ID: mdl-38630695

ABSTRACT

This paper proposes a reinforced concrete (RC) boundary beam-wall system that requires less construction material and a smaller floor height than the conventional RC transfer girder system. The structural performance of the system under axial compression was evaluated through a structural test on four half-scale specimens. In addition, three-dimensional nonlinear finite element analysis was performed to verify the effectiveness of the boundary beam-wall system. Three test parameters were considered: the lower wall length-to-upper wall length ratio, the lower wall thickness, and the stirrup details of the lower wall. The load-displacement curve was plotted for each specimen and its failure mode was identified. The test results showed that a decrease in the lower wall length-to-upper wall length ratio significantly reduced the peak strength of the boundary beam-wall system, and that a difference between the upper and lower wall thicknesses resulted in lateral bending caused by eccentricity in the out-of-plane direction. Additionally, incorporating cross-ties and reducing stirrup spacing in the lower wall significantly improved initial stiffness and peak strength, effectively minimizing stress concentration.


Subjects
Construction Materials, Data Compression, Finite Element Analysis, Physical Phenomena
3.
PLoS One ; 19(4): e0288296, 2024.
Article in English | MEDLINE | ID: mdl-38557995

ABSTRACT

Network traffic prediction is an important network monitoring method, widely used in network resource optimization and anomaly detection. However, with the increasing scale of networks and the rapid development of 5th-generation mobile networks (5G), traditional traffic forecasting methods are no longer adequate. To solve this problem, this paper combines a Long Short-Term Memory (LSTM) network, data augmentation, a clustering algorithm, model compression, and other techniques, and proposes the Cluster-based Lightweight PREdiction Model (CLPREM), a method for real-time traffic prediction in 5G mobile networks. We designed dedicated data processing and classification methods that make CLPREM more robust than traditional neural network models. To demonstrate the effectiveness of the method, we designed and conducted experiments in a variety of settings. The experimental results confirm that CLPREM achieves higher accuracy than traditional prediction schemes at a lower time cost. To address the occasional anomalous predictions produced by CLPREM, we propose a preprocessing method that adds minimal time overhead. This approach not only enhances the accuracy of CLPREM but also effectively addresses the real-time traffic prediction challenge in 5G mobile networks.
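
For readers unfamiliar with the building blocks, a minimal LSTM-based one-step traffic predictor is sketched below in PyTorch. It is a toy under stated assumptions, not CLPREM: the clustering, data augmentation, and model compression described above are omitted, and the layer sizes are placeholders.

```python
# Toy one-step traffic predictor built on an LSTM (not CLPREM itself).
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, window, 1) traffic volumes
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # predict the next time step

model = TrafficLSTM()
window = torch.randn(8, 24, 1)                # 8 series, 24 past samples each
print(model(window).shape)                    # torch.Size([8, 1])
```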


Subjects
Data Compression, Neural Networks (Computer), Algorithms, Forecasting
4.
Sci Rep ; 14(1): 7650, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561346

ABSTRACT

This study presents an advanced metaheuristic approach termed the Enhanced Gorilla Troops Optimizer (EGTO), which builds upon the Marine Predators Algorithm (MPA) to enhance the search capabilities of the Gorilla Troops Optimizer (GTO). Like numerous other metaheuristic algorithms, the GTO has difficulty preserving convergence accuracy and stability, notably when tackling intricate and adaptable optimization problems, especially when compared to more advanced optimization techniques. To address these challenges and improve performance, this paper proposes the EGTO, which integrates the high- and low-velocity ratios inspired by the MPA. The EGTO technique effectively balances the exploration and exploitation phases, achieving impressive results while using fewer parameters and operations. Evaluation on a diverse array of benchmark functions, comprising 23 established functions and ten complex ones from the CEC2019 benchmark, highlights its performance. Comparative analysis against established optimization techniques reveals EGTO's superiority: it consistently outperforms counterparts such as tuna swarm optimization, the grey wolf optimizer, the gradient-based optimizer, the artificial rabbits optimization algorithm, the pelican optimization algorithm, the Runge-Kutta optimization algorithm (RUN), and the original GTO across various test functions. Furthermore, EGTO's efficacy extends to seven challenging engineering design problems: three-bar truss design, compression spring design, pressure vessel design, cantilever beam design, welded beam design, speed reducer design, and gear train design. The results showcase EGTO's robust convergence rate, its adeptness in locating local/global optima, and its superiority over the alternative methodologies explored.


Assuntos
Nativos do Alasca , Compressão de Dados , Lagomorpha , Animais , Humanos , Coelhos , Gorilla gorilla , Algoritmos , Benchmarking
5.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610365

ABSTRACT

High-quality cardiopulmonary resuscitation (CPR) and training are important for successful revival during out-of-hospital cardiac arrest (OHCA). However, existing training faces challenges in quantifying each aspect of performance. This study aimed to explore the possibility of using a three-dimensional motion capture system to accurately and effectively assess CPR operations, particularly the arm postures that have not previously been quantified, and to analyze the relationships among the measured parameters to guide students in improving their performance. We used a motion capture system (Mars series, Nokov, China) to collect compression data over five cycles, recording the dynamic position of each marker point in three-dimensional space over time and calculating compression depth and arm angles. Most measurements deviated to some extent, and unstably, from the standard, especially for the untrained students. The five data sets for each parameter per individual all revealed statistically significant differences (p < 0.05). The correlation between Angle 1' and Angle 2' differed between trained (rs = 0.203, p < 0.05) and untrained students (rs = -0.581, p < 0.01). Their performance still needed improvement. When conducting assessments, we should focus not only on overall performance but also on each individual compression. This study provides a new perspective for quantifying compression parameters, and future efforts should continue to incorporate new parameters and analyze the relationships among them.
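
As a hedged illustration of the kind of quantities involved (the marker layout and definitions below are assumptions, not taken from the study), a compression depth can be derived from a chest marker's vertical trajectory and an arm angle from three marker positions:

```python
# Illustrative only: a depth estimate from a chest marker's vertical track
# and a joint angle from three 3D marker positions.
import numpy as np

def compression_depth(chest_z: np.ndarray) -> np.ndarray:
    """Displacement below the resting (maximum) chest height, per sample."""
    return chest_z.max() - chest_z

def joint_angle(a, b, c) -> float:
    """Angle in degrees at point b formed by segments b->a and b->c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(joint_angle([0, 0, 1], [0, 0, 0], [0, 1, 0]))   # 90.0
```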


Assuntos
Reanimação Cardiopulmonar , Compressão de Dados , Humanos , Estudos de Viabilidade , Captura de Movimento , China
6.
J Acoust Soc Am ; 155(4): 2589-2602, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38607268

ABSTRACT

The processing and perception of amplitude modulation (AM) in the auditory system reflect a frequency-selective process, often described as a modulation filterbank. Previous studies on perceptual AM masking reported similar results for older listeners with hearing impairment (HI listeners) and young listeners with normal hearing (NH listeners), suggesting no effects of age or hearing loss on AM frequency selectivity. However, recent evidence has shown that age, independently of hearing loss, adversely affects AM frequency selectivity. Hence, this study aimed to disentangle the effects of hearing loss and age. A simultaneous AM masking paradigm was employed, using a sinusoidal carrier at 2.8 kHz, narrowband noise modulation maskers, and target modulation frequencies of 4, 16, 64, and 128 Hz. The results obtained from young (n = 3, 24-30 years of age) and older (n = 10, 63-77 years of age) HI listeners were compared to previously obtained data from young and older NH listeners. Notably, the HI listeners generally exhibited lower (unmasked) AM detection thresholds and greater AM frequency selectivity than their NH counterparts in both age groups. Overall, the results suggest that age negatively affects AM frequency selectivity for both NH and HI listeners, whereas hearing loss improves AM detection and AM selectivity, likely due to the loss of peripheral compression.


Assuntos
Compressão de Dados , Surdez , Perda Auditiva , Humanos , Mascaramento Perceptivo
7.
BMC Genomics ; 25(1): 266, 2024 Mar 09.
Article in English | MEDLINE | ID: mdl-38461245

ABSTRACT

BACKGROUND: DNA storage has the advantages of large capacity, long-term stability, and low power consumption relative to other storage media, making it a promising new storage medium for multimedia information such as images. However, DNA storage has a low coding density and weak error correction ability. RESULTS: To achieve more efficient DNA storage image reconstruction, we propose DNA-QLC (QRes-VAE and Levenshtein code (LC)), which uses the quantized ResNet VAE (QRes-VAE) model and LC for image compression and DNA sequence error correction, thus improving both the coding density and the error correction ability. Experimental results show that the DNA-QLC encoding method not only obtains DNA sequences that meet the combinatorial constraints, but also achieves a net information density 2.4 times higher than that of DNA Fountain. Furthermore, at a higher error rate (2%), DNA-QLC achieved image reconstruction with an SSIM value of 0.917. CONCLUSIONS: The results indicate that the DNA-QLC encoding scheme guarantees the efficiency and reliability of the DNA storage system and improves the application potential of DNA storage for multimedia information such as images.
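
As a purely illustrative sketch of the kind of constraints DNA encoders must respect (the mapping and run limit below are assumptions, not DNA-QLC's scheme), bits can be mapped to bases two at a time and the resulting sequence checked for homopolymer runs:

```python
# Illustrative only: 2-bit-per-base mapping plus a homopolymer-run check.
# DNA-QLC's QRes-VAE compression and Levenshtein-code correction are not shown.
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def bits_to_dna(bits: str) -> str:
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def max_homopolymer(seq: str) -> int:
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

seq = bits_to_dna("0110110001")
print(seq, max_homopolymer(seq) <= 3)   # CGTAC True (run limit of 3 assumed)
```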


Assuntos
Algoritmos , Compressão de Dados , Reprodutibilidade dos Testes , DNA/genética , Compressão de Dados/métodos , Processamento de Imagem Assistida por Computador/métodos
8.
Sci Rep ; 14(1): 5087, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429300

ABSTRACT

When EEG signals are collected at conventional Nyquist sampling rates, long-term recordings produce a large amount of data. At the same time, limited bandwidth, end-to-end delay, and memory space place great pressure on effective data transmission. Compressed sensing alleviates this transmission pressure. However, iterative compressed sensing reconstruction algorithms are computationally complex and slow, limiting the application of compressed sensing in rapid EEG monitoring systems. This paper therefore presents a non-iterative, fast algorithm for reconstructing EEG signals using compressed sensing and deep learning techniques. The algorithm uses an improved residual network model, extracts EEG feature information with one-dimensional dilated convolutions, and directly learns the nonlinear mapping between the measurements and the original signal, allowing it to reconstruct the EEG signal quickly and accurately. The proposed method was verified by simulation on the open BCI competition dataset. Overall, the results show that the proposed method has higher reconstruction accuracy and faster reconstruction speed than traditional CS reconstruction algorithms and existing deep learning reconstruction algorithms, and it enables rapid reconstruction of EEG signals.
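
A minimal sketch of such a non-iterative decoder, using one-dimensional dilated convolutions with residual connections, is given below in PyTorch; the measurement size, channel counts, and dilation schedule are illustrative assumptions rather than the paper's architecture.

```python
# Sketch of a feed-forward CS decoder with 1D dilated residual blocks.
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, ch: int, dilation: int):
        super().__init__()
        self.conv = nn.Conv1d(ch, ch, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.act(self.conv(x))          # residual connection

class CSDecoder(nn.Module):
    def __init__(self, m: int, n: int, ch: int = 16):
        super().__init__()
        self.expand = nn.Linear(m, n)              # measurements -> signal length
        self.inp = nn.Conv1d(1, ch, 1)
        self.blocks = nn.Sequential(*[DilatedResBlock(ch, 2 ** d) for d in range(3)])
        self.out = nn.Conv1d(ch, 1, 1)

    def forward(self, y):                          # y: (batch, m) CS measurements
        x = self.expand(y).unsqueeze(1)            # (batch, 1, n)
        return self.out(self.blocks(self.inp(x))).squeeze(1)

print(CSDecoder(m=64, n=256)(torch.randn(4, 64)).shape)   # torch.Size([4, 256])
```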


Assuntos
Compressão de Dados , Aprendizado Profundo , Processamento de Sinais Assistido por Computador , Compressão de Dados/métodos , Algoritmos , Eletroencefalografia/métodos
9.
Neural Netw ; 174: 106220, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447427

ABSTRACT

Structured pruning is a representative model compression technique for convolutional neural networks (CNNs), aiming to prune less important filters or channels. Most recent structured pruning methods establish criteria to measure the importance of filters, mainly based on the magnitude of weights or other parameters in the CNN. However, these criteria lack explainability, and relying solely on the numerical values of network parameters is insufficient to assess the relationship between a channel and model performance. Moreover, directly applying these pruning criteria for global pruning may lead to suboptimal solutions; therefore, complementary search algorithms are needed to determine the pruning ratio for each layer. To address these issues, we propose ARPruning (Attention-map-based Ranking Pruning), which reconstructs a new pruning criterion for the importance of intra-layer channels and further develops a new local neighborhood search algorithm for determining the optimal inter-layer pruning ratio. To measure the relationship between a channel to be pruned and model performance, we construct an intra-layer channel importance criterion based on the attention map of each layer. Then, we propose an automatic pruning strategy search method that can find the optimal solution effectively and efficiently. By integrating the well-designed pruning criterion and search strategy, ARPruning can maintain a high compression rate while achieving outstanding accuracy. Our experiments also show that, compared with state-of-the-art pruning methods, ARPruning achieves better compression results. The code can be obtained at https://github.com/dozingLee/ARPruning.
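
The general flavor of ranking channels by an activation/attention statistic can be sketched as follows; the energy-based importance measure and the single fixed pruning ratio are simplifications, not ARPruning's criterion or its local neighborhood search over per-layer ratios.

```python
# Simplified sketch: rank one layer's channels by attention-map energy
# and keep the top fraction. Not ARPruning's actual criterion or search.
import torch

def rank_channels(attn_maps: torch.Tensor) -> torch.Tensor:
    """attn_maps: (batch, channels, H, W). Returns channel indices, best first."""
    importance = attn_maps.abs().mean(dim=(0, 2, 3))   # mean activation energy
    return torch.argsort(importance, descending=True)

def keep_mask(order: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    n_keep = int(len(order) * (1 - prune_ratio))
    mask = torch.zeros(len(order), dtype=torch.bool)
    mask[order[:n_keep]] = True
    return mask

maps = torch.randn(8, 64, 14, 14)                      # toy maps for one layer
print(keep_mask(rank_channels(maps), prune_ratio=0.5).sum())   # tensor(32)
```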


Assuntos
Algoritmos , Compressão de Dados , Redes Neurais de Computação
10.
Bioinformatics ; 40(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38530800

ABSTRACT

MOTIVATION: The full automation of digital neuronal reconstruction from light microscopic images has long been impeded by noisy neuronal images. Previous efforts to improve image quality have struggled to achieve a good compromise between robustness and computational efficiency. RESULTS: We present an image enhancement pipeline named Neuronal Image Enhancement through Noise Disentanglement (NIEND). Through extensive benchmarking on 863 mouse neuronal images with manually annotated gold standards, NIEND achieves remarkable improvements in image quality, such as signal-background contrast (40-fold) and background uniformity (10-fold), compared to raw images. Furthermore, automatic reconstructions on NIEND-enhanced images show significant improvements over both raw images and images enhanced using other methods. Specifically, the average F1 score of NIEND-enhanced reconstructions is 0.88, surpassing the original 0.78 and the second-ranking method, which achieved 0.84. Up to 52% of reconstructions from NIEND-enhanced images outperform all other four methods in F1 score. In addition, NIEND requires only 1.6 s on average to process a 256 × 256 × 256 image, and images after NIEND attain a substantial average compression rate of 1% with LZMA. NIEND improves image quality and neuron reconstruction, providing potential for significant advances in automated neuron morphology reconstruction at petascale. AVAILABILITY AND IMPLEMENTATION: The study is conducted based on Vaa3D and Python 3.10. Vaa3D is available on GitHub (https://github.com/Vaa3D). The proposed NIEND method is implemented in Python and hosted on GitHub along with the testing code and data (https://github.com/zzhmark/NIEND). The raw neuronal images of mouse brains can be found at the BICCN's Brain Image Library (BIL) (https://www.brainimagelibrary.org). The detailed list and associated meta information are summarized in Supplementary Table S3.


Assuntos
Compressão de Dados , Neurônios , Animais , Camundongos , Tomografia Computadorizada por Raios X/métodos , Aumento da Imagem , Encéfalo , Processamento de Imagem Assistida por Computador/métodos
11.
IEEE Trans Image Process ; 33: 2502-2513, 2024.
Article in English | MEDLINE | ID: mdl-38526904

ABSTRACT

Residual coding has gained prevalence in lossless compression, where a lossy layer is employed first and the reconstruction errors (i.e., residues) are then losslessly compressed. The underlying principle of residual coding revolves around exploiting priors through context modeling. Here, we propose a residual coding framework for 3D medical images that uses an off-the-shelf video codec as the lossy layer and a Bilateral Context Modeling based Network (BCM-Net) as the residual layer. The BCM-Net is proposed to achieve efficient lossless compression of residues by exploring intra-slice and inter-slice bilateral contexts. In particular, a symmetry-based intra-slice context extraction (SICE) module is proposed to mine bilateral intra-slice correlations rooted in the inherent anatomical symmetry of 3D medical images. Moreover, a bi-directional inter-slice context extraction (BICE) module is designed to explore bilateral inter-slice correlations from bi-directional references, thereby yielding representative inter-slice context. Experiments on popular 3D medical image datasets demonstrate that the proposed method outperforms existing state-of-the-art methods owing to efficient redundancy reduction. Our code will be available on GitHub for future research.
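
The residual-coding principle itself (a lossy layer whose reconstruction errors are compressed losslessly) can be shown with a toy NumPy example; the crude quantizer and zlib back end below are stand-ins for illustration, not the video codec or BCM-Net described above.

```python
# Toy residual coding: lossy layer (quantization) + lossless residues (zlib).
import numpy as np
import zlib

slice_img = np.random.randint(0, 4096, size=(64, 64), dtype=np.int16)

lossy = (slice_img // 16) * 16                  # stand-in lossy layer
residue = (slice_img - lossy).astype(np.int16)  # reconstruction errors
packed = zlib.compress(residue.tobytes())       # lossless residual layer

restored = lossy + np.frombuffer(
    zlib.decompress(packed), dtype=np.int16).reshape(slice_img.shape)
assert np.array_equal(restored, slice_img)      # exact reconstruction
print(len(packed), "bytes for the residues")
```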


Assuntos
Compressão de Dados , Compressão de Dados/métodos , Imageamento Tridimensional/métodos
12.
Sci Rep ; 14(1): 6209, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38485967

ABSTRACT

Efficient and rapid auxiliary diagnosis of different grades of lung adenocarcinoma helps doctors accelerate individualized diagnosis and treatment, thus improving patient prognosis. Pathological images of lung adenocarcinoma tissue at different grades often exhibit large intra-class differences and small inter-class differences. If attention mechanisms such as Coordinate Attention (CA) are used directly for lung adenocarcinoma grading, they tend to compress feature information excessively and overlook dependencies within the same dimension. Therefore, we propose a Dimension Information Embedding Attention Network (DIEANet) for lung adenocarcinoma grading. Specifically, we combine different pooling methods to automatically select local regions of key growth patterns, such as lung adenocarcinoma cells, enhancing the model's focus on local information. Additionally, we employ an interactive fusion approach to concentrate feature information within the same dimension and across dimensions, thereby improving model performance. Extensive experiments show that, at equal computational cost, DIEANet with a ResNet34 backbone reaches an accuracy of 88.19%, an AUC of 96.61%, an MCC of 81.71%, and a Kappa of 81.16%, achieving state-of-the-art objective metrics compared to seven other attention mechanisms. It also aligns more closely with the visual attention of pathology experts under subjective visual assessment.


Assuntos
Adenocarcinoma de Pulmão , Adenocarcinoma , Compressão de Dados , Neoplasias Pulmonares , Humanos , Benchmarking , Neoplasias Pulmonares/diagnóstico
13.
Nat Commun ; 15(1): 2376, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38491032

ABSTRACT

Despite growing interest in archiving information in synthetic DNA to confront the data explosion, quantitatively querying data stored in DNA remains a challenge. Here, we present Search Enabled by Enzymatic Keyword Recognition (SEEKER), which utilizes CRISPR-Cas12a to rapidly generate visible fluorescence when a DNA target corresponding to a keyword of interest is present. SEEKER achieves quantitative text searching because the growth rate of the fluorescence intensity is proportional to the keyword frequency. Compatible with SEEKER, we develop non-collision grouping coding, which reduces the size of the dictionary and enables lossless compression without disrupting the original order of the texts. Using four queries, we correctly identify keywords in 40 files against a background of ~8000 irrelevant terms. Parallel searching with SEEKER can be performed on a 3D-printed microfluidic chip. Overall, SEEKER provides a quantitative approach to parallel searching over the complete content stored in DNA, with simple implementation and rapid result generation.


Assuntos
Compressão de Dados , Ferramenta de Busca
14.
PLoS One ; 19(3): e0297154, 2024.
Article in English | MEDLINE | ID: mdl-38446783

ABSTRACT

This study introduces a novel concrete-filled tube (CFT) column system featuring a steel tube composed of four internal triangular units (ITUs). The incorporation of these ITUs serves to reduce the width-thickness ratio of the steel tube and to enlarge the effective confinement area of the infilled concrete. This design enhancement is anticipated to improve structural strength and ductility, contributing to enhanced overall performance and sustainability. To assess the effectiveness of the newly proposed column system, a full-scale test was conducted on five square steel tube column specimens subjected to axial compression. Two of the specimens followed the conventional steel tube column design, while the remaining three featured the new CFT columns with ITUs. The shape of the CFT column, the presence of infilled concrete, and the presence of openings in the ITUs were considered as test parameters. The test results reveal that the ductility of the newly proposed CFT column system improved by at least 30% compared to the conventional CFT column. In addition, the initial stiffness and axial compressive strength of the new system were comparable to those of the conventional CFT column.


Assuntos
Compressão de Dados , Força Compressiva , Fenômenos Físicos , Aço , Resistência à Tração
15.
Neural Netw ; 174: 106250, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38531122

ABSTRACT

Snapshot compressive hyperspectral imaging necessitates the reconstruction of a complete hyperspectral image from its compressive snapshot measurement, presenting a challenging inverse problem. This paper proposes an enhanced deep unrolling neural network, called EDUNet, to tackle this problem. EDUNet is constructed by deep unrolling of a proximal gradient descent algorithm and introduces two innovative modules for the gradient-driven update and the proximal mapping, respectively. The gradient-driven update module leverages a memory-assisted descent approach inspired by momentum-based acceleration techniques, enhancing the unrolled reconstruction process and improving convergence. The proximal mapping is modeled by a sub-network with cross-stage spectral self-attention, which effectively exploits the inherent self-similarities present in hyperspectral images along the spectral axis. It also enhances feature flow throughout the network, contributing to gains in reconstruction performance. Furthermore, we introduce a spectral geometry consistency loss, encouraging EDUNet to prioritize the geometric layout of spectral curves, leading to a more precise capture of spectral information in hyperspectral images. Experiments are conducted on three benchmark datasets, KAIST, ICVL, and Harvard, along with some real data, comprising 73 samples in total. The experimental results demonstrate that EDUNet outperforms 15 competing models across four metrics: PSNR, SSIM, SAM, and ERGAS.
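
The proximal-gradient structure that such networks unroll can be written in a few lines; in this sketch the learned proximal module is replaced by plain soft-thresholding, and the sensing matrix, step size, and threshold are illustrative assumptions.

```python
# Sketch of unrolled proximal gradient descent with a soft-threshold prox.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_pgd(A, y, n_stages=100, step=0.05, lam=0.01):
    x = A.T @ y                               # initial estimate
    for _ in range(n_stages):                 # each stage = one "unrolled" layer
        x = x - step * A.T @ (A @ x - y)      # gradient-driven update
        x = soft_threshold(x, lam)            # proximal mapping (stand-in)
    return x

A = np.random.randn(32, 128) / np.sqrt(32)    # toy sensing matrix
x_true = np.zeros(128)
x_true[[5, 40, 90]] = 1.0
x_hat = unrolled_pgd(A, A @ x_true)
print(np.round(x_hat[[5, 40, 90]], 2))        # the three planted spikes should be
                                              # approximately recovered
```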


Assuntos
Compressão de Dados , Imageamento Hiperespectral , Fenômenos Físicos , Algoritmos , Movimento (Física)
16.
Sci Rep ; 14(1): 5168, 2024 03 02.
Article in English | MEDLINE | ID: mdl-38431641

ABSTRACT

Magnetic resonance imaging is a medical imaging technique used to create comprehensive images of the tissues and organs in the body. This study presents an advanced approach for storing and compressing Neuroimaging Informatics Technology Initiative (NIfTI) files, a standard format in magnetic resonance imaging. It is designed to enhance telemedicine services by facilitating efficient and high-quality communication between healthcare practitioners and patients. The proposed downsampling approach begins by opening the NIfTI file as volumetric data and then splitting it into several slice images. A quantization hiding technique is then applied to each pair of consecutive slice images to generate a stego slice of the same size. This involves three major steps: normalization, microblock generation, and discrete cosine transformation. Finally, the resulting stego slice images are assembled to produce the final NIfTI file as volumetric data. The upsampling process, designed to be completely blind, reverses the downsampling steps to accurately reconstruct the subsequent image slice. The efficacy of the proposed method was evaluated on a magnetic resonance imaging dataset, using peak signal-to-noise ratio, signal-to-noise ratio, structural similarity index, and entropy as key performance metrics. The results demonstrate that the proposed approach not only significantly reduces file sizes but also maintains high image quality.
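
As a hedged illustration of the block-transform step such a pipeline builds on (the block size and orthonormal DCT here are assumptions; the quantization hiding itself is not reproduced), a slice can be split into 8 x 8 microblocks and transformed block by block:

```python
# Illustrative blockwise 2D DCT over 8x8 microblocks of one slice.
import numpy as np
from scipy.fft import dctn, idctn

def blockwise_dct(slice_img: np.ndarray, block: int = 8) -> np.ndarray:
    h, w = slice_img.shape
    out = np.zeros_like(slice_img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = dctn(
                slice_img[i:i + block, j:j + block], norm="ortho")
    return out

img = np.random.rand(64, 64)                   # toy normalized slice
coeffs = blockwise_dct(img)
# The orthonormal DCT is invertible, so each microblock can be recovered.
print(np.allclose(idctn(coeffs[:8, :8], norm="ortho"), img[:8, :8]))   # True
```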


Assuntos
Compressão de Dados , Telemedicina , Humanos , Compressão de Dados/métodos , Imageamento por Ressonância Magnética/métodos , Neuroimagem , Razão Sinal-Ruído
17.
IUCrJ ; 11(Pt 2): 190-201, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38327201

ABSTRACT

Serial crystallography (SX) has become an established technique for protein structure determination, especially when dealing with small or radiation-sensitive crystals and investigating fast or irreversible protein dynamics. The advent of newly developed multi-megapixel X-ray area detectors, capable of capturing over 1000 images per second, has brought substantial benefits. However, this advancement also entails a notable increase in the volume of collected data. Today, up to 2 PB of data per experiment can easily be obtained under efficient operating conditions. The combined cost of storing data from multiple experiments provides a compelling incentive to develop strategies that effectively reduce the amount of data stored on disk while maintaining the quality of scientific outcomes. Lossless data-compression methods are designed to preserve the information content of the data but often struggle to achieve a high compression ratio when applied to experimental data that contain noise. Conversely, lossy compression methods offer the potential to greatly reduce the data volume. Nonetheless, since lossy compression inherently involves discarding information, it is vital to thoroughly assess its impact on data quality and scientific outcomes. Such an evaluation requires appropriate data quality metrics. In our research, we assess various lossless and lossy compression techniques applied to SX data and, equally importantly, we describe metrics suitable for evaluating SX data quality.


Assuntos
Algoritmos , Compressão de Dados , Cristalografia , Compressão de Dados/métodos , Tomografia Computadorizada por Raios X
18.
Gene ; 907: 148235, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38342250

ABSTRACT

Next Generation Sequencing (NGS) technology generates massive amounts of genome sequence data, and the volume increases rapidly over time. As a result, there is a growing need for efficient compression algorithms to facilitate the processing, storage, transmission, and analysis of large-scale genome sequences. Over the past 31 years, numerous state-of-the-art compression algorithms have been developed. The performance of any compression algorithm is measured by three main metrics: compression ratio, time, and memory usage. Existing k-mer hash indexing systems take more time because their decision-making process is based on compression results. In this paper, we propose a two-phase reference genome compression algorithm using an optimal k-mer length (RGCOK). Reference-based compression takes advantage of the inter-similarity between chromosomes of the same species. RGCOK achieves this by finding the optimal k-mer length for matching, using a randomization method and hashing. The performance of RGCOK was evaluated on three different benchmark data sets: the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), Homo sapiens, and other species sequences, using an Amazon AWS virtual cloud machine. Experiments showed that the optimal k-mer finding time of RGCOK is around 45.28 min, whereas the corresponding time for the existing state-of-the-art algorithms HiRGC, SCCG, and HRCM ranges from 58 min to 8.97 h.
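
A minimal sketch of reference-based matching with a k-mer hash index is given below; the fixed k and toy sequences are illustrative assumptions and do not reflect RGCOK's optimal-k search or its two-phase encoder.

```python
# Toy k-mer hash index for reference-based matching (not RGCOK itself).
from collections import defaultdict

def kmer_index(reference: str, k: int) -> dict:
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)     # k-mer -> reference positions
    return index

def match_positions(target: str, index: dict, k: int):
    """Yield (target_pos, reference_pos) pairs where k-mers match exactly."""
    for i in range(len(target) - k + 1):
        for j in index.get(target[i:i + k], ()):
            yield i, j

ref = "ACGTACGTGGA"
tgt = "TTACGTGG"
print(list(match_positions(tgt, kmer_index(ref, k=5), k=5)))
# -> [(1, 3), (2, 4), (3, 5)]
```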


Assuntos
Compressão de Dados , Software , Humanos , Compressão de Dados/métodos , Algoritmos , Genoma , Sequenciamento de Nucleotídeos em Larga Escala/métodos , Análise de Sequência de DNA/métodos
19.
Magn Reson Imaging ; 108: 116-128, 2024 May.
Article in English | MEDLINE | ID: mdl-38325727

ABSTRACT

This work aims to improve the efficiency of multi-coil data compression and to recover the compressed image reversibly, increasing the potential of applying the proposed method in medical scenarios. A deep learning algorithm is employed for MR coil compression. The approach introduces a variable augmentation network for invertible coil compression (VAN-ICC), which exploits the inherent reversibility of normalizing flow-based models. By applying variable augmentation technology to image/k-space variables from multiple coils, VAN-ICC trains the invertible network by finding an invertible and bijective function that maps the original data to the compressed counterpart and vice versa. Experiments conducted on both fully-sampled and under-sampled data verified the effectiveness and flexibility of VAN-ICC. Quantitative and qualitative comparisons with traditional, non-deep-learning-based approaches demonstrated that VAN-ICC achieves much stronger compression. By training the invertible network to find an invertible and bijective function, the proposed method remedies the shortcomings of traditional coil compression methods through the inherent reversibility of normalizing flow-based models, and the variable augmentation technology ensures that the network remains reversible. In short, VAN-ICC offers a competitive advantage over traditional coil compression algorithms.


Assuntos
Compressão de Dados , Compressão de Dados/métodos , Imageamento por Ressonância Magnética/métodos , Algoritmos , Processamento de Imagem Assistida por Computador/métodos
20.
Sci Rep ; 14(1): 3207, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38332238

ABSTRACT

Many previous studies have investigated visual distance perception, especially for small to moderate distances. Few experiments, however, have evaluated the perception of large distances (e.g., 100 m or more), and the studies that have been conducted have reached conflicting, diametrically opposite conclusions. In the current experiment, the functions relating actual and perceived distance were obtained for sixteen adult observers using the method of equal-appearing intervals. These functions were obtained for outdoor viewing in a typical university environment: the experiment was conducted along a sidewalk adjacent to a typical street where campus buildings, trees, street signs, etc. were visible. The overall results indicated perceptual compression of distances in depth, such that the stimulus distance intervals appeared significantly shorter than the actual (physical) distance intervals. It is important to note, however, that there were sizeable individual differences: the judgments of half of the observers were relatively accurate, whereas the judgments of the remaining half were inaccurate to varying degrees. The results of the experiment demonstrate that there is no single function that describes how human observers visually perceive large distance intervals in outdoor environments.


Subjects
Data Compression, Visual Perception, Adult, Humans, Distance Perception, Judgment, Individuality, Depth Perception